70 research outputs found

    A scalable dataflow accelerator for real time onboard hyperspectral image classification

    Real-time hyperspectral image classification is a necessary primitive in many remotely sensed image analysis applications. Previous work has shown that Support Vector Machines (SVMs) can achieve high classification accuracy, but they are computationally very expensive. This paper presents a scalable dataflow accelerator on FPGA for real-time SVM classification of hyperspectral images. To address data dependencies, we adapt a multi-class classifier based on Hamming distance. The architecture scales with problem dimensionality and with the available hardware resources. Implementation results show that the FPGA design achieves speedups of 26x, 1335x, 66x and 14x compared with implementations on ZYNQ, ARM, DSP and Xeon processors. Moreover, a one to two orders of magnitude reduction in power consumption is achieved for the AVIRIS hyperspectral image datasets.
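
    A minimal software analogue of that Hamming-distance decoding step is sketched below: each binary SVM contributes one bit, and the predicted class is the one whose codeword disagrees with the fewest bits. This is an illustrative NumPy sketch, not the paper's FPGA dataflow design, and all array names and shapes are assumptions.

```python
import numpy as np

# Illustrative sketch of Hamming-distance decoding for multi-class SVM
# classification (error-correcting-output-codes style). Shapes and names
# are assumptions for illustration, not taken from the FPGA implementation.

def binary_svm_bits(pixel, weights, biases):
    """Evaluate all binary (linear) SVM decision functions for one pixel.

    pixel   : (n_bands,)               hyperspectral pixel spectrum
    weights : (n_classifiers, n_bands) linear SVM weight vectors
    biases  : (n_classifiers,)         SVM bias terms
    Returns one {0, 1} bit per binary classifier.
    """
    scores = weights @ pixel + biases
    return (scores > 0).astype(np.uint8)

def classify_hamming(pixel, weights, biases, codebook):
    """Pick the class whose codeword is closest, in Hamming distance,
    to the observed bit pattern.

    codebook : (n_classes, n_classifiers) matrix of {0, 1} codewords
    """
    bits = binary_svm_bits(pixel, weights, biases)
    distances = np.count_nonzero(codebook != bits, axis=1)
    return int(np.argmin(distances))
```

    Each binary decision function can be evaluated independently and the final decision reduces to comparing small integer distances, which is consistent with the abstract's motivation of avoiding data dependencies in a dataflow design.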

    Single-shot compressed ultrafast photography at one hundred billion frames per second

    The capture of transient scenes at high imaging speed has long been sought by photographers, with early examples being the well-known recording in 1878 of a horse in motion and the 1887 photograph of a supersonic bullet. However, not until the late twentieth century were breakthroughs achieved in demonstrating ultrahigh-speed imaging (more than 10^5 frames per second). In particular, the introduction of electronic imaging sensors based on the charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) technology revolutionized high-speed photography, enabling acquisition rates of up to 10^7 frames per second. Despite these sensors’ widespread impact, further increasing frame rates using CCD or CMOS technology is fundamentally limited by their on-chip storage and electronic readout speed. Here we demonstrate a two-dimensional dynamic imaging technique, compressed ultrafast photography (CUP), which can capture non-repetitive time-evolving events at up to 10^11 frames per second. Compared with existing ultrafast imaging techniques, CUP has the prominent advantage of measuring an x–y–t (x, y, spatial coordinates; t, time) scene with a single camera snapshot, thereby allowing observation of transient events with a temporal resolution of tens of picoseconds. Furthermore, akin to traditional photography, CUP is receive-only, and so does not need the specialized active illumination required by other single-shot ultrafast imagers. As a result, CUP can image a variety of luminescent—such as fluorescent or bioluminescent—objects. Using CUP, we visualize four fundamental physical phenomena with single laser shots only: laser pulse reflection and refraction, photon racing in two media, and faster-than-light propagation of non-information (that is, motion that appears faster than the speed of light but cannot convey information). Given CUP’s capability, we expect it to find widespread applications in both fundamental and applied sciences, including biomedical research.
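
    The abstract does not spell out the measurement operator, but the scheme it refers to amounts to spatially encoding the dynamic scene, shearing successive time slices, and integrating everything onto a single 2-D snapshot; recovering the x–y–t cube is then a compressed-sensing inverse problem. The toy sketch below illustrates only that forward model, with made-up dimensions, mask and shearing direction.

```python
import numpy as np

# Toy forward model for a single-shot x-y-t measurement in the spirit of CUP:
# encode each frame with a pseudo-random binary mask, shift (shear) frame t by
# t rows, and sum onto one 2-D snapshot. Dimensions and operators are
# illustrative assumptions, not the instrument's actual parameters.

rng = np.random.default_rng(0)
ny, nx, nt = 64, 64, 32                   # spatial size and number of time bins
scene = rng.random((ny, nx, nt))          # stand-in for the dynamic scene
mask = rng.integers(0, 2, size=(ny, nx))  # binary spatial encoding mask

snapshot = np.zeros((ny + nt, nx))        # single 2-D measurement
for t in range(nt):
    snapshot[t:t + ny, :] += mask * scene[:, :, t]

# The inverse problem (estimating `scene` from `snapshot`) has far more
# unknowns than measurements, so it is solved with sparsity-promoting priors.
```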

    Snapshot photoacoustic topography through an ergodic relay of optical absorption in vivo

    Photoacoustic tomography (PAT) has demonstrated versatile biomedical applications, ranging from tracking single cells to monitoring whole-body dynamics of small animals and diagnosing human breast cancer. Currently, PAT has two major implementations: photoacoustic computed tomography (PACT) and photoacoustic microscopy (PAM). PACT uses a multi-element ultrasonic array for parallel detection, which is relatively complex and expensive. In contrast, PAM requires point-by-point scanning with a single-element detector, which has a limited imaging throughput. The trade-off between system cost and throughput demands a new imaging method. To this end, we have developed photoacoustic topography through an ergodic relay (PATER). PATER can capture a wide-field image with only a single-element ultrasonic detector upon a single laser shot. This protocol describes the detailed procedures for PATER system construction, including component selection, equipment setup and system alignment. A step-by-step guide for in vivo imaging of a mouse brain is provided as an example application. Data acquisition, image reconstruction and troubleshooting procedures are also elaborated. It takes ~130 min to carry out this protocol, including ~60 min for both calibration and snapshot wide-field data acquisition using a laser with a 2-kHz pulse repetition rate. PATER offers low-cost snapshot wide-field imaging of fast dynamics, such as visualizing blood pulse wave propagation and tracking melanoma tumor cell circulation in mice in vivo. We envision that PATER will have wide biomedical applications and anticipate that the compact size of the setup will allow it to be further developed as a wearable device to monitor human vital signs.

    Airborne Object Detection Using Hyperspectral Imaging: Deep Learning Review

    Hyperspectral images have become increasingly important in object detection applications, especially in remote sensing scenarios. Machine learning algorithms have emerged as powerful tools for hyperspectral image analysis. The high dimensionality of hyperspectral images and the availability of simulated spectral sample libraries make deep learning an appealing approach. This report reviews recent data processing and object detection methods in the area, including hand-crafted and automated feature extraction based on deep learning neural networks. Accuracy was compared using existing reports as well as our own experiments (i.e., re-implementing and testing on new datasets). CNN models provided reliable performance, with over 97% detection accuracy across a large set of HSI collections. A wide range of data were used, from a rural area (Indian Pines), an urban area (Pavia University), a wetland region (Botswana) and an industrial field (Kennedy Space Center) to a farm site (Salinas). Note that the Botswana set had not been reviewed in recent works, so the selected high-accuracy methods were newly compared on it in this work. A plain CNN model was also found to perform comparably to its more complex variants in target detection applications.
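
    For illustration, a "plain" CNN baseline for hyperspectral patch classification might look like the minimal sketch below, assuming PyTorch; the layer widths, patch size and class count are placeholders rather than the configurations evaluated in the review.

```python
import torch
import torch.nn as nn

# Minimal sketch of a plain CNN for hyperspectral patch classification,
# assuming PyTorch. All sizes are illustrative placeholders.

class PlainHSICNN(nn.Module):
    def __init__(self, n_bands=200, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # collapse the spatial patch
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                     # x: (batch, n_bands, patch, patch)
        return self.classifier(self.features(x).flatten(1))

# Example: classify a batch of 7x7 patches from a 200-band cube
# (Indian Pines, for instance, has roughly 200 usable bands and 16 classes).
model = PlainHSICNN(n_bands=200, n_classes=16)
logits = model(torch.randn(8, 200, 7, 7))     # (8, 16) class scores
```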

    Nonmonotone Barzilai-Borwein Gradient Algorithm for ℓ1-Regularized Nonsmooth Minimization in Compressive Sensing

    This paper is devoted to minimizing the sum of a smooth function and a nonsmooth ℓ1-regularized term. This problem includes as a special case the ℓ1-regularized convex minimization problem arising in signal processing, compressive sensing, machine learning, data mining, etc. However, the non-differentiability of the ℓ1-norm makes the problem more challenging, especially for the large-scale instances encountered in many practical applications. This paper proposes, analyzes, and tests a Barzilai-Borwein gradient algorithm. At each iteration, the generated search direction enjoys the descent property and can be derived easily by minimizing a local approximate quadratic model while exploiting the favorable structure of the ℓ1-norm. Moreover, a nonmonotone line search technique is incorporated to find a suitable stepsize along this direction. The algorithm is easy to implement, requiring only the objective function value and the gradient of the smooth term at each iteration. Under some conditions, the proposed algorithm is shown to be globally convergent. Limited experiments using nonconvex unconstrained problems from the CUTEr library with additive ℓ1-regularization illustrate that the proposed algorithm performs quite well. Extensive experiments on ℓ1-regularized least squares problems in compressive sensing verify that our algorithm compares favorably with several state-of-the-art algorithms designed specifically for this problem in recent years.
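
    A hedged sketch of the kind of iteration described above is given below: the search direction comes from an ℓ1-regularized local quadratic model scaled by a Barzilai-Borwein parameter (which reduces to a soft-thresholding step), followed by a nonmonotone Armijo-type backtracking line search. The concrete parameter values and the exact nonmonotone rule are illustrative choices, not necessarily the paper's.

```python
import numpy as np

# Sketch of a nonmonotone Barzilai-Borwein method for  min_x f(x) + lam*||x||_1.
# The direction minimizes  g'd + (alpha/2)||d||^2 + lam*||x + d||_1, which is a
# soft-thresholding step; parameters and the line-search rule are illustrative.

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def nonmonotone_bb_l1(f, grad_f, x0, lam, max_iter=500, memory=10,
                      sigma=1e-4, tol=1e-8):
    x, g = x0.copy(), grad_f(x0)
    alpha = 1.0                                   # BB curvature of the model
    F = lambda z: f(z) + lam * np.abs(z).sum()    # full objective
    F_hist = [F(x)]                               # recent objective values

    for _ in range(max_iter):
        # l1-aware direction from the local quadratic model
        d = soft_threshold(x - g / alpha, lam / alpha) - x
        if np.linalg.norm(d) < tol:
            break

        # nonmonotone backtracking against the max of recent objective values
        F_ref = max(F_hist[-memory:])
        delta = g @ d + lam * (np.abs(x + d).sum() - np.abs(x).sum())  # < 0
        t = 1.0
        while t > 1e-12 and F(x + t * d) > F_ref + sigma * t * delta:
            t *= 0.5

        x_new = x + t * d
        g_new = grad_f(x_new)

        # Barzilai-Borwein update of the scalar curvature
        s, y = x_new - x, g_new - g
        if s @ s > 0:
            alpha = min(max((y @ s) / (s @ s), 1e-10), 1e10)

        x, g = x_new, g_new
        F_hist.append(F(x))
    return x
```

    With f(x) = ½‖Ax − b‖² and gradient Aᵀ(Ax − b), this reduces to the ℓ1-regularized least squares setting used in the compressive sensing experiments.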

    High-resolution hyperspectral image fusion based on spectral unmixing

    This paper presents a high-resolution hyperspectral image fusion algorithm based on spectral unmixing. The widely used linear observation model (with additive Gaussian noise) is combined with the linear spectral mixture model to form the data terms. The non-negativity and sum-to-one constraints, which follow from the intrinsic physical properties of the abundances (i.e., the fractions of the materials contained in each pixel), are introduced to regularize the ill-posed image fusion problem. The joint fusion and unmixing problem is formulated as the minimization of a cost function with respect to the mixing matrix (which contains the spectral signatures of the pure materials, referred to as endmembers) and the abundance maps, subject to the non-negativity and sum-to-one constraints. This optimization problem is tackled with an alternating optimization strategy. The two resulting sub-problems are convex and are solved efficiently using the alternating direction method of multipliers. Simulation results, including comparisons with the state of the art, document the effectiveness and competitiveness of the proposed unmixing-based fusion algorithm.
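
    A simplified sketch of that alternating scheme is shown below: the high-resolution image is parameterized as endmembers times abundances, and two quadratic data terms couple it to the degraded observations (spatial blur/downsampling B for the hyperspectral image, spectral response R for the high-resolution image). For brevity the sub-problems are handled with projected gradient steps and a crude clip-and-renormalize surrogate for the sum-to-one constraint, rather than the ADMM solvers used in the paper; all operators, sizes and step sizes are illustrative assumptions.

```python
import numpy as np

# Sketch of alternating fusion/unmixing: estimate endmembers E (bands x p) and
# abundances A (p x pixels) so that  E @ A @ B ~ Yh  (low-res hyperspectral)
# and  R @ E @ A ~ Ym  (high-res multispectral). Projected gradient steps and a
# clip-and-renormalize simplex surrogate replace the paper's ADMM sub-solvers.

def project_abundances(A, eps=1e-12):
    """Non-negativity plus an approximate sum-to-one normalization per pixel."""
    A = np.maximum(A, 0.0)
    return A / (A.sum(axis=0, keepdims=True) + eps)

def fuse_unmix(Yh, Ym, B, R, n_end, n_iter=200, step=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n_bands, n_pix = R.shape[1], B.shape[0]
    E = np.abs(rng.standard_normal((n_bands, n_end)))      # endmember spectra
    A = project_abundances(np.abs(rng.standard_normal((n_end, n_pix))))

    for _ in range(n_iter):
        Z = E @ A                          # current high-res hyperspectral image
        Rh = Z @ B - Yh                    # hyperspectral data-term residual
        Rm = R @ Z - Ym                    # high-resolution data-term residual

        # gradient step in E, then project onto non-negativity
        gE = Rh @ B.T @ A.T + R.T @ Rm @ A.T
        E = np.maximum(E - step * gE, 0.0)

        # gradient step in A, then project onto the (approximate) simplex
        gA = E.T @ (Rh @ B.T) + (R @ E).T @ Rm
        A = project_abundances(A - step * gA)
    return E, A
```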

    Generalized Conjugate Gradient Methods for ℓ
